Smoothed Analysis of the 2-Opt Heuristic for the TSP: Polynomial Bounds for Gaussian Noise

Authors

  • Bodo Manthey
  • Rianne Veenstra
Abstract

The 2-opt heuristic is a very simple local search heuristic for the traveling salesman problem. While it usually converges quickly in practice, its running-time can be exponential in the worst case. In order to explain the performance of 2-opt, Englert, Röglin, and Vöcking (Algorithmica, to appear) provided a smoothed analysis in the so-called one-step model on d-dimensional Euclidean instances. However, translating their results to the classical model of smoothed analysis, where points are perturbed by Gaussian distributions with standard deviation σ, yields a bound that is only polynomial in n and 1/σ^d. We prove bounds that are polynomial in n and 1/σ for the smoothed running-time with Gaussian perturbations. In particular, our analysis for Euclidean distances is much simpler than the existing smoothed analysis.

1 2-Opt and Smoothed Analysis

The traveling salesman problem (TSP) is one of the classical combinatorial optimization problems. Euclidean TSP is the following variant: given points X ⊆ [0, 1]^d, find the shortest Hamiltonian cycle (also called a tour) that visits all points of X. Even this restricted variant is NP-hard for d ≥ 2 [16]. We consider Euclidean TSP with Manhattan, Euclidean, and squared Euclidean distances between points. For the former two, there exist polynomial-time approximation schemes (PTAS) [1, 14]. The latter, which has applications in power assignment problems for wireless networks [8], admits a PTAS for d = 2 and is APX-hard for d ≥ 3 [15].

As it is unlikely that there are efficient algorithms for solving Euclidean TSP optimally, heuristics have been developed in order to find near-optimal solutions quickly. One very simple and popular heuristic is 2-opt: starting from an initial tour, we iteratively replace two edges by two other edges to obtain a shorter tour until we have found a local optimum. Experiments indicate that 2-opt converges to near-optimal solutions quite quickly [9, 10], but its worst-case performance is bad: the worst-case running-time is exponential even for d = 2 [7], and the approximation ratio can be Ω(log n / log log n) for Euclidean instances [5].

An alternative to worst-case analysis is average-case analysis, where the expected performance with respect to some probability distribution is measured. The average-case running-time of 2-opt for Euclidean instances and its average-case approximation ratio for non-metric instances have been analyzed [4–6, 11]. However, while worst-case analysis is often too pessimistic because it is dominated by artificial instances that are rarely encountered in practice, average-case analysis is dominated by random instances, which with high probability have very special properties that they do not share with typical instances.

In order to overcome the drawbacks of both worst-case and average-case analysis and to explain the performance of the simplex method, Spielman and Teng invented smoothed analysis [17]: an adversary specifies an instance, and then this instance is slightly randomly perturbed. The smoothed performance is the expected performance, where the expectation is taken over the random perturbation. The underlying assumption is that real-world instances are often subjected to a small amount of random noise, which can, e.g., come from measurement or rounding errors. Smoothed analysis often allows more realistic conclusions about the performance than worst-case or average-case analysis.
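For concreteness, the following is a minimal Python sketch of the 2-opt heuristic described above. It is only illustrative: the function names two_opt and tour_length are ours, Euclidean distances are hard-coded for simplicity, and it is not necessarily the exact variant whose running time is analyzed in the paper.

```python
# Minimal sketch of 2-opt local search (illustrative only; names are ours).
import math

def tour_length(points, tour):
    """Length of the closed tour that visits `points` in the order given by `tour`."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Repeatedly replace two edges of the tour by two other edges as long as this
    shortens the tour; stop as soon as no improving exchange exists (local optimum)."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges share an endpoint; skip
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Replacing edges (a, b) and (c, d) by (a, c) and (b, d) changes the
                # tour length by `delta`; accept the exchange only if it improves.
                delta = (math.dist(points[a], points[c]) + math.dist(points[b], points[d])
                         - math.dist(points[a], points[b]) - math.dist(points[c], points[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]  # reverse segment b..c
                    improved = True
    return tour
```

For example, two_opt(points, list(range(len(points)))) starts from the identity tour, and tour_length reports the resulting cost. The quantity bounded in the smoothed analysis is essentially the number of such improving exchanges.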
Since its invention, smoothed analysis has been applied successfully to explain the performance of a variety of algorithms [12, 18].

Englert, Röglin, and Vöcking [7] provided a smoothed analysis of 2-opt in order to explain its performance. They used the one-step model: an adversary specifies n density functions f_1, …, f_n : [0, 1]^d → [0, φ]. Then the n points x_1, …, x_n are drawn independently according to the densities f_1, …, f_n, respectively. Here, φ is the perturbation parameter. If φ = 1, then the only possibility is the uniform distribution on [0, 1]^d, and we obtain an average-case analysis. The larger φ, the more powerful the adversary. Englert et al. [7] proved that the expected running-time of 2-opt is O(n^4 φ log n) and O(n^{4+1/3} φ^{8/3} log(nφ)) for Manhattan and Euclidean distances, respectively. These bounds can be improved slightly by choosing the initial tour with an insertion heuristic. However, if we transfer these bounds to the classical model of points perturbed by Gaussian distributions of standard deviation σ, we obtain bounds that are polynomial in n and 1/σ^d. This is because the maximum density of a d-dimensional Gaussian with standard deviation σ is Θ(σ^{-d}). While this is polynomial for any fixed d, it is unsatisfactory that the degree of the polynomial depends on d.

Our Contribution. We provide a smoothed analysis of the running-time of 2-opt in the classical model, where points in [0, 1]^d are perturbed by independent Gaussian distributions of standard deviation σ. The bounds that we prove for Gaussian perturbations are polynomial in n and 1/σ, and the degree of the polynomial is independent of d. As distance measures, we consider Manhattan (Section 3), Euclidean (Section 5), and squared Euclidean distances (Section 4). The analysis for Manhattan distances is a straightforward adaptation of the existing analysis by Englert et al. However, while the degree of the polynomial in n is independent of d in our bound, the bound still contains a factor that is exponential in d. Our analysis for Euclidean distances is considerably simpler than the one by Englert et al., which is rather technical and takes more than 20 pages [7]. The analysis for squared Euclidean distances is, to our knowledge, not preceded by a smoothed analysis in the one-step model. Because of the nice properties of squared Euclidean distances and Gaussian perturbations, this smoothed ...
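To contrast the two input models described above, here is a small sketch (assuming NumPy; the function names, the choice of uniform-on-a-cube densities for the one-step model, and the example parameters are ours, not definitions from the paper):

```python
# Illustrative sketch of the one-step model and the classical Gaussian model.
import numpy as np

def one_step_instance(centers, phi, rng):
    """One-step model: each point is drawn from its own density bounded by phi.
    As a simple example density, we use the uniform distribution on an axis-aligned
    cube of side length phi**(-1/d) around an adversarial center (density exactly phi)."""
    centers = np.asarray(centers, dtype=float)          # shape (n, d), adversarial
    n, d = centers.shape
    side = phi ** (-1.0 / d)
    return centers + rng.uniform(-side / 2, side / 2, size=(n, d))

def gaussian_instance(centers, sigma, rng):
    """Classical model: each adversarial point in [0,1]^d is perturbed by an
    independent d-dimensional Gaussian with standard deviation sigma."""
    centers = np.asarray(centers, dtype=float)
    return centers + rng.normal(scale=sigma, size=centers.shape)

rng = np.random.default_rng(0)
n, d, sigma = 50, 2, 0.05
adversary = rng.random((n, d))          # placeholder adversarial points in [0,1]^d

# A d-dimensional Gaussian with standard deviation sigma has maximum density
# (2 * pi * sigma**2) ** (-d / 2) = Theta(sigma**(-d)).  Hence expressing the Gaussian
# perturbation in the one-step model forces phi = Theta(sigma**(-d)), which is why the
# translated bounds are polynomial in 1/sigma**d, whereas the direct Gaussian analysis
# gives bounds polynomial in 1/sigma with degree independent of d.
phi = (2 * np.pi * sigma**2) ** (-d / 2)
points_one_step = one_step_instance(adversary, phi, rng)
points_gaussian = gaussian_instance(adversary, sigma, rng)
```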


Related articles

Towards Understanding the Smoothed Approximation Ratio of the 2-Opt Heuristic

The 2-Opt heuristic is a very simple, easy-to-implement local search heuristic for the traveling salesman problem. While it usually provides good approximations to the optimal tour in experiments, its worst-case performance is poor. In an attempt to explain the approximation performance of 2-Opt, we analyze the smoothed approximation ratio of 2-Opt. We obtain a bound of O(log(1/σ)) for the smoo...


A Hybrid Modified Meta-heuristic Algorithm for Solving the Traveling Salesman Problem

The traveling salesman problem (TSP) is one of the most important combinatorial optimization problems and has received much attention because of its practical applications in industrial and service problems. In this paper, a hybrid two-phase meta-heuristic algorithm called MACSGA for solving the TSP is presented. In the first stage, the TSP is solved by the modified ant colony s...


On the Smoothed Approximation Ratio of the 2-Opt Heuristic for the TSP

The 2-Opt heuristic is a simple, easy-to-implement local search heuristic for the traveling salesman problem. While it usually provides good approximations to the optimal tour in experiments, its worst-case performance is poor. In an attempt to explain the approximation performance of 2-Opt, we prove an upper bound of exp(O(√(log(1/σ)))) for the smoothed approximation ratio of 2-Opt. As a lower...


Capacity Bounds and High-SNR Capacity of the Additive Exponential Noise Channel With Additive Exponential Interference

Communication in the presence of a priori known interference at the encoder has gained great interest because of its many practical applications. In this paper, the additive exponential noise channel with additive exponential interference (AENC-AEI) known non-causally at the transmitter is introduced as a new variant of such communication scenarios. First, it is shown that the additive Gaussian ch...


The Smoothed Number of Pareto Optimal Solutions in Bicriteria Integer Optimization

A well-established heuristic approach for solving various bicriteria optimization problems is to enumerate the set of Pareto optimal solutions, typically using some kind of dynamic programming approach. The heuristics following this principle are often successful in practice. Their running time, however, depends on the number of enumerated solutions, which can be exponential in the worst case. ...



Publication date: 2013